Patent abstract:
The present invention relates to a robust method of detecting obstacles in a given geographical area. The method comprises a pre-detection step that processes measurements taken in said area by several exteroceptive sensors, including at least one lidar-type sensor and at least one non-lidar-type sensor, in particular of the radar or camera type. It further comprises a step of confirming the pre-detection step, including comparing a point cloud measured by the lidar-type sensor in said area with map data corresponding to said area. Application: autonomous driving
Publication number: FR3079034A1
Application number: FR1852188
Filing date: 2018-03-14
Publication date: 2019-09-20
Inventors: Marie-Anne Mittet; Olivier Cayol; Marc LAVABRE
Applicant: Renault SAS; Nissan Motor Co Ltd
IPC main class:
Patent description:

Robust method of obstacle detection, especially for autonomous vehicles
Technical field:
The present invention relates to the field of autonomous driving, that is to say, motor vehicles without a driver or in which the driver can be relieved at least temporarily of any driving task. The invention applies in particular, but not exclusively, to passenger vehicles.
Prior art and technical problem:
A prerequisite for autonomous driving is knowledge of the general environment of the vehicle, commonly referred to as "awareness" in Anglo-Saxon terminology. It is a matter not only of recognizing permanent road infrastructure, such as the road, road markings or traffic signs, but also of detecting temporary obstacles, in particular other vehicles and pedestrians, whether mobile or immobile. In addition to the constantly evolving nature of this environment, which requires continuously re-evaluating the acquired knowledge base, a major difficulty is the diversity of the situations that can arise, due in particular to the diversity in the nature of the obstacles. In order to detect the obstacles surrounding the vehicle, it is necessary to use perception sensors. However, these may suffer from detection errors, or worse, from non-detections. This is a problem which the present invention proposes to address.
In order to overcome these difficulties, the strategy commonly adopted to optimize the robustness of obstacle detection consists in diversifying sensor technologies, in particular in combining or "fusing" data from cameras, radars or even lidars. The idea is that, if one of the sensor technologies misses a detection, i.e. fails to signal an obstacle that is actually present, then the other technologies will hopefully compensate for the error by detecting it. Despite this diversification of sensor technologies, many problematic situations remain, such as the one illustrated in Figure 1. In this figure, a traffic sign is carried on board a service vehicle. In this situation, where the service vehicle is positioned on the side of the track, the camera wrongly categorizes it as a traffic sign, therefore representing no danger ("Detection missed" according to Anglo-Saxon terminology). The radar considers the service vehicle as a "ghost" echo according to Anglo-Saxon terminology, that is to say it treats it not as a real object but as a parasitic echo to be neglected, likewise representing no danger ("Detection missed").
Finally, the lidars do perceive this truck ("Detection OK"), but their detection is not taken into account, being in the minority. Currently, lidar data can take the form of raw data called "point clouds". But to date, it is not these point clouds that are taken into account in the fusion process used to establish the detection of unforeseen obstacles: the data fusion uses the detected objects resulting from the processing carried out on these point clouds.
We therefore understand that, in the particular situation of the service vehicle carrying a traffic sign (as in other situations which will be detailed later), the diversification of sensor technologies creates confusion, or even an apparent inconsistency: how should the inconsistency between the data from the lidar (which detects an obstacle) and the other sensors (which detect nothing) be interpreted? Is there a real obstacle on the road that the fusion process failed to detect? Or is it a false alarm due to a malfunction of the lidars? In any case, the detection is missed ("Missed object" according to Anglo-Saxon terminology). This is a problem which the present invention proposes to solve.
In order to solve this problem, document US2013242284 proposes a method for detecting obstacles by fusing not only data from radar and camera systems, but also data from a lidar. By supplementing conventional perception systems with the lidar point cloud, this solution improves detection robustness: fewer real objects go undetected. But some objects are still missed, often because of the apparent inconsistency between the data being fused, which this solution cannot always overcome. Thus, the service vehicle carrying a sign on the roadside would most likely not be detected, since the lidar data alone would probably not be enough for the fusion process to confirm a detection. This again is a problem which the present invention proposes to solve.
Summary of the invention:
The purpose of the invention is in particular to reduce the number of missed objects and to reduce false detections, by resolving a posteriori the inconsistencies between the lidar point cloud and the map data. To this end, the invention relates to a method for detecting obstacles in a given geographical area. The method comprises a pre-detection step that processes measurements taken in said area by several exteroceptive sensors, including at least one sensor of the lidar type and at least one sensor of a non-lidar type, in particular of the radar type or of the camera type. It also includes a step of confirming the pre-detection step, including comparing a point cloud measured by the lidar-type sensor in said area with map data corresponding to said area.
Advantageously, the comparison of the point cloud with the cartographic data can include calculating a level of correlation between the data of said cloud and said cartographic data.
In one embodiment, the level of correlation can be an increasing function of the amount of consistent data between the cloud data and the cartographic data and can be:
- 0 if none of the data in the point cloud correlates with the cartographic data;
- 1 if all the data in the point cloud correlates with the cartographic data.
Advantageously, if the non-lidar sensor detects an obstacle in the area, but the lidar sensor does not detect said obstacle and the level of correlation between the lidar point cloud and the cartographic data is close to 1, then the detection of said obstacle by the non-lidar sensor can be invalidated.
Advantageously, if the non-lidar sensor does not detect any obstacle in the area, but the lidar sensor detects an obstacle in the area and the level of correlation between the lidar point cloud and the cartographic data is distant from 1, then the detection of said obstacle by the lidar sensor can be confirmed or invalidated according to other criteria.
Advantageously, if the non-lidar sensor detects an obstacle in the area, and the lidar sensor also detects said obstacle in the area, and the level of correlation between the lidar point cloud and the cartographic data to which said detected obstacle has been added is distant from 1, then the detection of said obstacle by the non-lidar sensor and by the lidar sensor can be confirmed or invalidated according to other criteria.
Advantageously, if the non-lidar sensor does not detect any obstacle in the area, and the lidar sensor does not detect any obstacle in the area either, and the level of correlation between the lidar point cloud and the cartographic data is close to 1, then the absence of detection by the non-lidar sensor and by the lidar sensor can be confirmed.
For example, other criteria may include thresholds for correlation level.
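The four decision rules above can be sketched as a small arbitration function. This is a minimal illustration: the threshold values, parameter names and return labels are assumptions chosen for readability, not values specified by the patent.

```python
def confirm_detection(non_lidar_detects, lidar_detects, correlation,
                      close=0.9, far=0.5):
    """Sketch of the four-case confirmation logic.

    `correlation` is the level in [0, 1] between the lidar point cloud and
    the map (with any detected object added to the map when both sensor
    families report one). `close` and `far` are illustrative thresholds.
    """
    if non_lidar_detects and not lidar_detects and correlation >= close:
        return "override non-lidar detection"   # likely ghost echo / misclassification
    if not non_lidar_detects and lidar_detects and correlation <= far:
        return "check other criteria"           # lidar-only detection, map disagrees
    if non_lidar_detects and lidar_detects and correlation <= far:
        return "check other criteria"           # both detect, augmented map disagrees
    if not non_lidar_detects and not lidar_detects and correlation >= close:
        return "confirm absence of obstacle"
    return "undetermined"
```

In practice the "other criteria" branch would apply the correlation-level thresholds mentioned above, possibly refined per zone of the consistency map.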
The subject of the invention is also a system comprising hardware and software means for implementing such a method, as well as a motor vehicle comprising such a system.
Benefits:
The main advantage of the present invention is above all its low cost, since it uses sensors and computers already on board most current vehicles.
Description of the figures:
Other characteristics and advantages of the invention will become apparent from the following description given with reference to the appended drawings which represent:
- Figure 1, by a screenshot, an example of a method according to the prior art;
- Figures 2 to 5, by diagrams, diagrams and screenshots, an embodiment of the invention.
Description of the invention from the figures:
Lidar is a state-of-the-art technology that accurately represents the environment around the sensor. It makes it possible not only to measure distances, but also to measure the reflectivity of surrounding objects. Thus, a 3D point cloud representing the environment can be obtained, and a reflectivity value can be associated with each point. The invention is based on a comparison between this point cloud and a "signature" of what is expected on the road, this signature being deduced from a 3D map stored in the vehicle. The 3D map used contains information on the infrastructure (lines, barriers, signs, etc.) but is devoid of any other object.
In order to make this comparison, the vehicle must first be precisely located in the 3D map being used. Once located, the measurement can be compared with the signature of the expected route. The signature of the expected route is possibly supplemented by the object or objects detected by the cameras and/or the radars. If the point cloud from the lidar corresponds perfectly to the signature, then there is consistency and we can state with certainty that there is no obstacle in front of the vehicle on the road (in particular no missed object). Conversely, as soon as the point cloud does not correspond to the signature, there is an inconsistency and a risk that an obstacle in front of the vehicle has been missed. It is however important to note that a perfect match will be almost impossible to obtain. Indeed, several phenomena can affect the measurement, and several sources of error can be distinguished: the imprecision of the vehicle's localization, the roll and pitch effects it undergoes, errors present in the maps used, or a malfunction of the lidar. The present invention also proposes to deal with these different sources of error. In the present embodiment, the specifications envisaged for satisfactory performance are the following:
- the size of the smallest object to be detected is a few centimeters: this represents the size from which the vehicle tire can be damaged in the event of an impact,
- the field of vision opens 180° toward the front of the vehicle;
- the field of vision extends laterally not only beyond the lateral strips delimiting the lanes, but also up to 50 meters beyond the emergency stop strips and safety barriers (it must be possible to detect a pedestrian on the side of the road);
- the field of vision extends longitudinally beyond 200 meters (vehicle stopping margin);
- the field of vision extends to 25 meters in height (which corresponds to a height of 5 meters to pass over a truck, followed by a 10% slope over 200 meters).
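As a hedged illustration, the field-of-view specifications above could be expressed as a point filter in the vehicle frame. The coordinate conventions and the simplified lateral test are assumptions; a real system would model the actual road geometry rather than a fixed lateral band.

```python
import math

def in_field_of_view(x, y, z, max_range=200.0, max_lateral=50.0, max_height=25.0):
    """Illustrative filter matching the specification targets above.

    x: forward distance (m), y: lateral offset (m), z: height (m),
    all in the vehicle frame. The defaults mirror the stated targets.
    """
    if x < 0.0:                         # 180° opening: forward half-plane only
        return False
    if math.hypot(x, y) > max_range:    # longitudinal reach (stopping margin)
        return False
    if abs(y) > max_lateral:            # lateral margin beyond the barriers (simplified)
        return False
    return z <= max_height              # truck clearance plus the 10% slope allowance
```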
Given all of these factors, a level of correlation should be defined, i.e. a value representing the degree of correlation between the 3D map and the data from the lidar. If these two pieces of information correlate perfectly, the correlation level equals 1. Conversely, if these two data sources do not correlate at all, the correlation level equals 0.
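One minimal way to realize such a correlation level is to count the fraction of lidar points that are consistent with the map. The sketch below assumes both the cloud and the map are reduced to 3D points and uses a naive O(n·m) nearest-neighbour test; the tolerance value is an assumption, and a real system would use a spatial index and account for the error sources listed above.

```python
def correlation_level(cloud, map_points, tol=0.2):
    """Fraction of cloud points having a map point within `tol` metres.

    Returns 1.0 when every point correlates with the map and 0.0 when
    none does, matching the two extremes defined in the text.
    """
    if not cloud:
        return 1.0  # no measurement to contradict the map (edge case)

    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2

    matched = sum(1 for p in cloud if any(close(p, q) for q in map_points))
    return matched / len(cloud)
```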
In order to assess any inconsistency between the measurements of the sensors and the 3D map, the invention proposes a processing logic, illustrated by FIGS. 3 and 4, which detects the objects missed by the conventional sensors while eliminating false detections.
Considering one of the essential aims of this invention, namely to use the 3D map and the lidar as an aid for obstacle detection, the different situations encountered can be summarized in the following table, which shows the cases illustrated by figure 2:
| Detection sensors | Case 1 | Case 2 | Case 3 | Case 4 |
| Camera + Radar | Objects detected | No objects detected | Objects detected | No objects detected |
| Lidar | Point cloud | Point cloud + objects | Point cloud + objects | Point cloud |
| Comparison with 3D map (+ associated correlation level) | Consistency with map & X1 | Inconsistency between map and lidar cloud & X2 | Consistency with map & X3 | Consistency with map & X4 |
| Conclusion | Presence of object depending on the value of X1 | Presence of object depending on the value of X2 | Presence of object depending on the value of X3 | Presence of object depending on the value of X4 |
It is important to remember that the camera and radar type sensors return objects and not raw data. Regarding the lidar, we consider that it returns raw data. We thus have the raw point clouds returned by the sensor directly. Furthermore, we consider here that the vehicle location error is controlled and known.
We can describe the following cases:
- Case 1:
o Camera + Radar: “detected objects” means that the camera and the radar both detected objects present in the scene. At this stage, we do not know if these detections are relevant or not, it can be a false "ghost" detection, such as those presented in the case of the service truck described above.
o Lidar: "point cloud" means that, as we mentioned before, lidar is a sensor that creates point clouds of the environment in which it operates. At this point, it returns a cloud, but no objects.
o Comparison with the 3D map: in this case we can expect the comparison between the point cloud given by the lidar and the 3D map to be consistent. That is, the difference between these two pieces of information is close to zero. However, it is necessary to control this with a level of correlation. A consistency map can illustrate geographically the lack of inconsistency.
o Conclusion: in this case, we expect that there is in reality no object on the road, and that these are "false" detections resulting from the fusion of the radar and camera data. Indeed, when comparing the 3D map and the lidar data, several thousand 3D points (measuring capacity of the lidar: 300,000 points per second) are compared with a highly resolved 3D map. It is therefore believed that the correlation between these two highly resolved elements cannot be wrong everywhere in the cloud and on the map. Thus, a high level of correlation cannot result from an error in the measurement of the lidar (itself composed of several measurement units).
- Case 2:
o Camera + radar: "no object detected" means that neither the camera nor the radar have detected an object, or that the merger of the measurements of the two sensors does not return an object.
o Lidar: "point cloud + objects" means that the lidar returns a point cloud, and from this, one or more objects could be detected.
o Comparison with the 3D map: there is an inconsistency between the 3D map and the data from the lidar. As before, we also estimate a level of correlation between the data from the lidar and the 3D map. The value of this level of correlation will therefore inform us about the presence or not of objects. A consistency map, or inconsistency in this case, can be used to identify inconsistencies geographically.
o Conclusion: as presented above, we know that sources of inconsistency can arise due to different phenomena. Therefore depending on these different phenomena, and the value of the level of correlation, we will be able to conclude on the actual presence of objects or not.
- Case 3:
o Camera + radar: we find ourselves in the same scenario as the first case presented, namely that the camera, the radar and / or the fusion of these two sources has or have detected an object.
o Lidar: "point cloud + objects" means that the lidar returns a point cloud, and from this one or more objects could be detected.
o Comparison with the 3D map: the objects detected by the various sensors are directly "added" to the map, in particular thanks to the fact that we know the position of the vehicle on the map. We then carry out our comparison operation in order to determine the level of correlation. Likewise, a consistency or inconsistency map can be used to identify inconsistencies geographically.
o Conclusion: as presented above, we know that sources of inconsistency can arise due to different phenomena. Therefore depending on these different phenomena, and the value of the level of correlation, we will be able to conclude on the actual presence of objects or not.
- Case 4:
o Camera + radar: in this case, neither the radar nor the camera, nor the fusion of the two, return objects.
o Lidar: as in the first case, the lidar here returns a cloud of points.
o Comparison with the 3D map: again, we compare the data from the sensors and the 3D map to determine a level of correlation. Again, a consistency map can illustrate the lack of inconsistency geographically.
o Conclusion: in this case, we expect the level of correlation to be high, which would mean an absence of object.
As illustrated in FIG. 2, representing the different cases mentioned in the table above, a consistency map can be established. It is also used to position the various objects that may be detected.
FIGS. 3 and 4 respectively illustrate an example of an algorithm and an example of software architecture implementing the invention.
As illustrated in FIG. 5, in an advantageous embodiment, two lidars can make it possible to obtain two point clouds with different perspectives and thus make the function more robust. The use of several lidars to carry out the invention presented has several advantages. It makes the correlation calculation more reliable. Furthermore, it provides a means of checking the good health of each of the lidars, since their respective measurements can be compared with one another. In the present exemplary embodiment, "Road DNA" (registered trademark) designates a very high resolution cartographic base marketed by TomTom.
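The dual-lidar cross-check might be arbitrated along the following lines. This is a hedged sketch: the function name, the agreement threshold, the return labels, and the rule of suspecting the lidar that also disagrees with the map are all assumptions, not logic stated in the patent.

```python
def lidar_health(corr_a_map, corr_b_map, corr_a_b, agree=0.8):
    """Cross-check two lidars using correlation levels.

    corr_a_map / corr_b_map: correlation of each lidar's cloud with the map;
    corr_a_b: mutual correlation of the two clouds over their overlap.
    """
    if corr_a_b >= agree:
        return "both consistent"      # clouds agree: a map mismatch is likely real
    # Clouds disagree: suspect the lidar that also disagrees more with the map.
    if corr_a_map < corr_b_map:
        return "suspect lidar A"
    if corr_b_map < corr_a_map:
        return "suspect lidar B"
    return "undetermined"
```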
In a simplified but advantageous embodiment aiming to limit both the computational load and the allocated memory space, the same processing operations according to the invention can be carried out using only 2D mapping. In this case, the mapped objects are the horizontal signaling elements, commonly called "white lines" or simply "lines". Indeed, as presented before, lidars measure distances but are also able to measure the reflectivity of objects. By nature, the markings (lines, arrows, etc.) are very reflective and are therefore detectable in the data from the lidars. Thus, as soon as a line or a portion of a mapped line is not seen by the lidars and no object has been detected there, these same processing operations using a level of correlation according to the invention, which are very inexpensive, can make it possible to detect additional objects not detected by the cameras and radars, such as for example a service vehicle carrying a traffic sign, as soon as it masks a horizontal line.
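A minimal sketch of this 2D variant follows, assuming lidar returns carry a reflectivity value and the mapped lines are discretized into ground cells. The cell size, the reflectivity threshold, and the cell representation are illustrative assumptions.

```python
REFLECTIVITY_MIN = 0.6   # illustrative threshold for paint returns

def seen_marking_cells(lidar_returns, cell_size=1.0):
    """Map high-reflectivity lidar returns (x, y, reflectivity) to ground cells."""
    cells = set()
    for x, y, reflectivity in lidar_returns:
        if reflectivity >= REFLECTIVITY_MIN:
            cells.add((int(x // cell_size), int(y // cell_size)))
    return cells

def occluded_line_cells(mapped_line_cells, lidar_returns):
    """Mapped marking cells with no reflective echo: candidate occlusions
    (e.g. an undetected vehicle masking a portion of a white line)."""
    return sorted(set(mapped_line_cells) - seen_marking_cells(lidar_returns))
```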
In this 2D embodiment, it may be advantageous to secure the non-detection of lines by the lidars, so as not to be deceived in cases where a line or a portion of a line is not simply masked by an object, but has actually been erased. It therefore seems advisable to use several sensors in order to provide redundancy. In the event of a detection of infrastructure degradation (erased line, etc.), advantage can then be taken of "Vehicle-to-Vehicle" (V2V) or "Vehicle-to-Infrastructure" (V2I) facilities in order to send messages to a database server hosted in the "cloud" according to Anglo-Saxon terminology. If the messages accumulate beyond a predefined threshold at the server level, the latter can then signal a problem with the freshness of the map at this location and broadcast a message to vehicles, again via V2V or V2I connections, or trigger the actions necessary to update the map.
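The server-side accumulation of V2V/V2I degradation reports could look like the following sketch. The class name, the threshold value, and the location keys are hypothetical; the patent only specifies that an alert is raised once reports exceed a predefined threshold.

```python
from collections import Counter

class MapFreshnessServer:
    """Cloud-side counter of infrastructure-degradation reports per location."""

    def __init__(self, threshold=5):
        self.threshold = threshold       # illustrative number of independent reports
        self.reports = Counter()

    def report(self, location):
        """Register one vehicle report; return True once the map at this
        location should be flagged as stale and vehicles alerted."""
        self.reports[location] += 1
        return self.reports[location] >= self.threshold
```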
The present invention can also be useful for detecting the malfunction of a sensor. Indeed, it is possible to count the inconsistencies in a given area of the inconsistency map. A threshold can then be determined beyond which the inconsistency is considered to be due to a sensor malfunction.
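This counting logic might be sketched as follows, assuming the inconsistency map is a set of flagged grid cells; the threshold value and the grid representation are assumptions for illustration.

```python
def sensor_malfunction_suspected(inconsistency_map, zone_cells, threshold=20):
    """Count flagged cells inside a zone of the inconsistency map.

    Beyond `threshold` flags in one zone, a sensor fault becomes a more
    plausible explanation than a real obstacle.
    """
    count = sum(1 for cell in zone_cells if cell in inconsistency_map)
    return count >= threshold
```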
Other advantages :
The invention described above also has the advantage, in addition to improving the robustness of obstacle detection, of being inexpensive: indeed, lidar is an economically advantageous technology, with the devices becoming fully electronic. In addition, the processing needs are relatively low given the available computing capacity, once the 3D map can be compressed to a reasonable scale.
Finally, a variant of the invention could be the positioning of lidars on the side of the road. Indeed, instead of positioning a lidar on the vehicle, one could imagine a perception system installed on the side of the road, such as on a lamppost. This remote lidar could carry out obstacle detection and inform the vehicles. Likewise, the lidar could be placed on a drone scanning the road from the sky; again, the drone would inform the vehicles in the event of obstacle detection.
Claims:
Claims (10)
[1" id="c-fr-0001]
1. Method for detecting obstacles in a given geographical area, the method comprising a step of pre-detection by processing measurements taken in said area by several exteroceptive sensors including at least one sensor of the lidar type and at least one sensor of a non-lidar type, in particular of the radar or camera type, the method being characterized in that it further comprises a step of confirming the pre-detection step, including comparing a cloud of points measured by the lidar-type sensor in said area with map data corresponding to said area.
[2" id="c-fr-0002]
2. Method according to claim 1, characterized in that the comparison of the point cloud with the cartographic data includes calculating a level of correlation between the data of said cloud and said cartographic data.
[3" id="c-fr-0003]
3. Method according to claim 2, characterized in that the level of correlation is an increasing function of the quantity of coherent data between the data of the cloud and the cartographic data and is equal to:
- 0 if none of the data in the point cloud correlates with cartographic data;
- 1 if all the data in the point cloud correlates with cartographic data.
[4" id="c-fr-0004]
4. Method according to claim 3, characterized in that, if the non-lidar sensor detects an obstacle in the area, but the lidar sensor does not detect said obstacle and the level of correlation between the lidar point cloud and the cartographic data is close to 1, then the detection of said obstacle by the non-lidar sensor is invalidated.
[5" id="c-fr-0005]
5. Method according to claim 3, characterized in that, if the non-lidar sensor detects no obstacle in the area, but the lidar sensor detects an obstacle in the area and that the level of correlation between the lidar point cloud and the cartographic data is distant from 1, then the detection of said obstacle by the lidar sensor is confirmed or invalidated according to other predefined criteria.
[6" id="c-fr-0006]
6. Method according to claim 3, characterized in that, if the non-lidar sensor detects an obstacle in the area, and the lidar sensor also detects said obstacle in the area and that the level of correlation between the lidar point cloud and the cartographic data to which said detected obstacle has been added is distant from 1, then the detection of said obstacle by the non-lidar sensor and by the lidar sensor is confirmed or invalidated according to other predefined criteria.
[7" id="c-fr-0007]
7. Method according to claim 3, characterized in that, if the non-lidar sensor does not detect any obstacle in the area, and the lidar sensor does not detect any obstacle in the area either, and the level of correlation between the lidar point cloud and the map data is close to 1, then the absence of detection by the non-lidar sensor and by the lidar sensor is confirmed.
[8" id="c-fr-0008]
8. Method according to claim 5 or 6, characterized in that the other criteria include correlation level thresholds.
[9" id="c-fr-0009]
9. System comprising hardware and software means for implementing a method according to any one of the preceding claims.
[10" id="c-fr-0010]
10. Motor vehicle comprising a system according to the preceding claim.
Similar technologies:
Publication number | Publication date | Patent title
EP3765868A1|2021-01-20|Robust method for detecting obstacles, in particular for autonomous vehicles
EP3126864B1|2020-07-15|Method for geolocating the environment of a carrier
FR3054673B1|2019-06-14|MERGING DETECTION DATA AND FOLLOWING OBJECTS FOR MOTOR VEHICLE
EP3767328A1|2021-01-20|Method for verifying the integrity of lidar data
FR3052727A1|2017-12-22|METHOD FOR DETERMINING A REFERENCE DRIVING CLASS
FR3088308A1|2020-05-15|METHOD FOR MEASURING A LEVEL OF WEAR OF A VEHICLE TIRE.
FR3085082A1|2020-02-21|ESTIMATION OF THE GEOGRAPHICAL POSITION OF A ROAD VEHICLE FOR PARTICIPATORY PRODUCTION OF ROAD DATABASES
US20180203100A1|2018-07-19|Quality metric for ranging sensor in a degraded visual environment for a situational awareness system
WO2018041978A1|2018-03-08|Device for determining a speed limit, on-board system comprising such a device, and method for determining a speed limit
FR3047217B1|2019-08-16|DEVICE FOR DETERMINING THE STATE OF A SIGNALING LIGHT, EMBEDY SYSTEM COMPRISING SUCH A DEVICE, VEHICLE COMPRISING SUCH A SYSTEM AND METHOD OF DETERMINING THE SAME
EP3008664B1|2019-10-23|Method and system for monitoringobjects in motion
US10859397B2|2020-12-08|Method and device for creating a map
KR102255924B1|2021-05-25|Vehicle and method for detecting lane
FR3057693A1|2018-04-20|LOCATION DEVICE AND DEVICE FOR GENERATING INTEGRITY DATA
FR3104112A1|2021-06-11|Method and system for determining, on board a motor vehicle, a path followed by the vehicle on a roadway on which a traffic lane appears and / or disappears
WO2019170863A1|2019-09-12|Method for detecting an anomaly in the perception by a motor vehicle of its environment
FR3106553A1|2021-07-30|Method and device for processing vehicle environment data
FR3107114A1|2021-08-13|Method and device for validating mapping data of a vehicle road environment
FR3105961A1|2021-07-09|Method and device for determining a lane change indicator for a vehicle
FR3036498A1|2016-11-25|METHOD AND SYSTEM FOR ONLINE LOCATION OF A MOTOR VEHICLE
FR3096469A1|2020-11-27|Method and system for identifying an object in the environment of a motor vehicle
FR3106215A1|2021-07-16|Vehicle environment data communication method and device
FR3100651A1|2021-03-12|Method and device for detecting an object for a vehicle
FR3098329A1|2021-01-08|Process for automatic entry into a predefined motor vehicle accident report form.
FR3061955A1|2018-07-20|SYSTEM AND METHOD FOR DETECTING A EVOLUTION CONTEXT OF A VEHICLE
Family patents:
Publication number | Publication date
WO2019175130A1|2019-09-19|
FR3079034B1|2020-02-21|
EP3765868A1|2021-01-20|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US9052721B1|2012-08-28|2015-06-09|Google Inc.|Method for correcting alignment of vehicle mounted laser scans with an elevation map for obstacle detection|
US8996228B1|2012-09-05|2015-03-31|Google Inc.|Construction zone object detection using light detection and ranging|
US9221396B1|2012-09-27|2015-12-29|Google Inc.|Cross-validating sensors of an autonomous vehicle|
CN110654380A|2019-10-09|2020-01-07|北京百度网讯科技有限公司|Method and device for controlling a vehicle|
US9128185B2|2012-03-15|2015-09-08|GM Global Technology Operations LLC|Methods and apparatus of fusing radar/camera object data and LiDAR scan points|
US20200049511A1|2018-08-07|2020-02-13|Ford Global Technologies, Llc|Sensor fusion|
Legal status:
2019-03-22| PLFP| Fee payment|Year of fee payment: 2 |
2019-09-20| PLSC| Search report ready|Effective date: 20190920 |
2020-03-19| PLFP| Fee payment|Year of fee payment: 3 |
2021-03-23| PLFP| Fee payment|Year of fee payment: 4 |
Priority:
Application number | Filing date | Patent title
FR1852188A|FR3079034B1|2018-03-14|2018-03-14|ROBUST METHOD FOR DETECTING OBSTACLES, IN PARTICULAR FOR AUTONOMOUS VEHICLES|
FR1852188|2018-03-14|
FR1852188A|FR3079034B1|2018-03-14|2018-03-14|ROBUST METHOD FOR DETECTING OBSTACLES, IN PARTICULAR FOR AUTONOMOUS VEHICLES|
PCT/EP2019/056074| WO2019175130A1|2018-03-14|2019-03-12|Robust method for detecting obstacles, in particular for autonomous vehicles|
EP19708864.4A| EP3765868A1|2018-03-14|2019-03-12|Robust method for detecting obstacles, in particular for autonomous vehicles|